News

AI + Social Media

By Stephen McConnell, Adjunct Instructor

Instead of Googling it, we now AI-it. 

Artificial intelligence is rightly all the rage now, especially with the launch of generative technologies like ChatGPT and Perplexity. 

We marvel at how once excruciating tasks — like writing that corporate memo or learning how to bake (at least for this writer) — have become about as easy as a walk in the park, thanks to the magic of these technologies.  

But AI has been quietly woven into our lives, work, memories, news and information for quite some time now, intimately and extensively intertwined, and in a far less visible way than ChatGPT.  

For all of us in the UNC Hussman community, if we’re on social media, we’re under the watchful eye of AI.  

When we thumb through our Facebook feed for the latest news and information about our friends or the world, AI is the silent guide charting the course of what posts we will see, essentially deciding for us what posts appear.  

When we hop on TikTok to be mesmerized by the platform's endless stream of entertaining and informative clips, AI is the invisible hand channeling that stream of content to us.  

And when we log onto Instagram and many other social media platforms to communicate with friends and family, AI is there — a digital Wizard of Oz analyzing our every move.  

While ChatGPT, Perplexity and Claude steal our attention, AI and other machine learning technologies have substantially controlled our social media and digital information environments for several years now.  

Thanks to recent AI breakthroughs, these systems have become more powerful and potentially even more troubling.  

Unlike ChatGPT, this AI is a daily gatekeeper of our reality, forming a calculated portrait of us based on what it perceives to be our values, affinities, ideologies and distastes, and from that portrait shaping the consequential streams of content that seamlessly appear on our feeds.  

With billions of users worldwide, including an estimated 3 billion active users on Facebook and 2 billion active users on Instagram, most of the world is now under the sway of an AI of some sort, inconspicuously baked into these platforms, and many users may not even know it. 

As my research and that of others has shown, the implications are substantial, from potentially placing you in a news and information bubble shaped to your detected interests, to plying you with content it predicts you will like so you stay on the platform longer.  

Now a UNC Hussman faculty member, I had my first inklings of social media’s peril and promise as an investigative reporter at The Scranton Times-Tribune, a daily newspaper in Scranton, Pennsylvania.  

Around 2012, I noticed my editor, John, stroll out of his office one day with a then-novel smartphone in hand. He was thumbing through Twitter and, from all appearances, captivated. That day, I sensed journalism was about to undergo a radical transformation. I was simultaneously inspired and scared. 

A few years later, ensconced in a Ph.D. program at Colorado State University, I began studying what fascinated and perhaps alarmed John: the power of social media as a communication channel. I also learned that a suite of machine learning technologies lurked beneath all those glitzy posts.  

Every like, share, post, photo and video you hand over is food for the algorithm to gobble up so it can better understand you. These “signals” are then fed into massive server farms, where models predict what you will want to see next on your feed, among other recommendations you encounter.  
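For readers who want a concrete picture, that signal-to-prediction loop can be sketched as a toy ranking function. Everything below, including the signal names and weights, is invented purely for illustration; real platform recommenders are vastly more complex, learned from data, and proprietary.

```python
# Toy sketch of engagement-based ranking (illustrative only).
# The signal names and weights are hypothetical, not any platform's actual model.

# Hypothetical engagement signals logged for one user, grouped by topic.
signals = {
    "cooking":  {"likes": 12, "shares": 3, "watch_seconds": 540},
    "politics": {"likes": 2,  "shares": 0, "watch_seconds": 60},
    "sports":   {"likes": 5,  "shares": 1, "watch_seconds": 200},
}

# Arbitrary weights standing in for a learned model.
WEIGHTS = {"likes": 1.0, "shares": 2.5, "watch_seconds": 0.01}

def affinity(topic_signals):
    """Score how strongly the user seems drawn to a topic."""
    return sum(WEIGHTS[name] * value for name, value in topic_signals.items())

# Rank topics by predicted affinity; a feed would then be filled
# mostly from the top of this list, reinforcing the strongest signals.
ranked = sorted(signals, key=lambda topic: affinity(signals[topic]), reverse=True)
print(ranked)
```

Even this crude sketch shows the feedback loop: whatever you engaged with most rises to the top, which invites more engagement with the same topics, which is how a feed can narrow over time.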

While our focus is rightly directed at the generative AI technologies making headlines (and billions of dollars) today, these less conspicuous AI technologies equally deserve our attention.  

AI-driven personalization of news and information can present risks to democracy, from exacerbating political polarization to filtering news out of our feeds, where more than half of Americans get their news today, according to Pew Research. We can’t be informed and self-govern if the page is blank, and an inscrutable algorithm made the call. 

My research has also shown how people get lost in “rabbit holes” of content because these AI-driven recommendation engines are so effective at delivering pitch-perfect personalization to them. It’s that good — and that’s by design.  

Because of personalization, we get to see the world we want to see — an indisputable benefit. But it can also shape a synthetic reality, crafted by an opaque algorithm that perhaps only offers us a few pieces of the complicated puzzle of life.